Test & Performance

Quality on the assembly line

Secure deployments through extensive CI/CD pipelines

Christian Dangl

To optimize your PHP projects and avoid deployment problems, you need a well-coordinated mix of tools, pipelines, and approaches.

“I can’t right now, I have to deploy a hotfix!”, Mark said in an upset tone. “Why, what’s the problem?”, one of his colleagues replied. “Tom—who’s on vacation—used a match expression everywhere! But that only exists in PHP 8 and we use 7.4!”

In hindsight, Mark would have loved to prevent that rollout, sparing himself the trouble of rebuilding Tom’s changes while under pressure from management. Once the new version is finally up and running, his colleagues ask themselves the same question that many others do: “How can I protect my software against problems like this?”

In this article, we’ll dedicate ourselves to the most important tools and steps for this task. We’ll use PHP checks, static code analysis, unit tests, and more. For developing long-term quality, we’ll also use fully automated pipelines that prevent faulty code from being merged into our main branch. Our focus is on the individual steps and how they connect into the big picture from PHP’s point of view. Areas such as unit testing with Jest, etc., can and should be integrated into a fully comprehensive pipeline. Since there are so many possibilities for the individual tools and frameworks, we’ll only touch upon them loosely, so that they’re at least executable and functional.


PHP’s weaknesses

I’m sure many of us can relate to Mark’s emotional state. Unfortunately, this is just one of the many potential issues in the daily life of a developer. There are many error sources that can be caused by different versions, approaches, and more. In order to prevent them, we first need to understand why these problems happen in the first place. PHP has become quite a powerful language, but it wasn’t always. Unfortunately, PHP’s origins lie in a poorly designed language, and because of its inconsistencies, it’s easy to create bad, unstable code. The lack of type safety alone can quickly lead to unplanned chaos and hidden bugs. For instance, in older PHP versions, a function argument could only be declared by its variable name, without a data type. Now it’s finally possible to resolve this unsafe definition and declare types explicitly. But the impact on a project when migrating between PHP versions can be fatal.

While migrating a very large application from PHP 5.6 to PHP 7.x, including many changes regarding type safety, we found that in the end, our functions’ signatures weren’t consistently prepared for NULL values. Despite the well-intentioned NULL checks inside our functions, the calls themselves already crashed with a FATAL error, because NULL wasn’t allowed without an explicitly nullable declaration. As you can imagine, these NULL cases only occurred in certain situations, in various areas of our application that were overlooked during testing. And when I say “we found that in the end…”, I mean in the live system. If the NULL checks had been done before the functions were called, there wouldn’t have been any problem and the code would have run as intended. This shows us that the real source of errors is developers themselves. But you can deliver code that works flawlessly, at least for the moment, in any “flexible” language! You just need the right steps and tools. And those work best in combination with pipelines.
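The pitfall can be reduced to a few lines. The sketch below, with invented function names, shows how a typed signature from such a migration rejects NULL outright, while an explicitly nullable parameter (PHP 7.1+) handles the case safely:

```php
<?php
declare(strict_types=1);

// After the migration, this hypothetical signature has a real type.
// Any NULL check *inside* the body comes too late: calling it with
// NULL already aborts with a TypeError, fatal if uncaught.
function formatName(string $name): string
{
    return trim($name);
}

// The safe variant declares the parameter as explicitly nullable.
function formatNameSafe(?string $name): string
{
    return $name === null ? '' : trim($name);
}

try {
    formatName(null);
} catch (TypeError $e) {
    echo "Crash: NULL is not allowed without a nullable type\n";
}

echo formatNameSafe(null); // handled gracefully, returns ''
```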

Hello Pipeline

Pipelines contain various subtasks that are executed fully automatically, either sequentially or in parallel. A pipeline always has a single result, which is either positive or, because of an error in a task, negative. This makes pipelines perfect for everything from unit tests and static code analysis to deployments and post-deployment tasks. A pipeline is built quickly. But as always, quick doesn’t necessarily mean stable. In the beginning, there’s always the question: “Where do I want my pipeline to run?” Depending on your repository hoster, you can use its built-in pipeline service or connect an external service to your repositories. Bitbucket, GitHub, GitLab, and co. now offer smooth integrations for running pipelines. If you want to stay independent of them, or focus even more on pipelines, there are companies like Buddy (https://buddy.works) dedicated entirely to pipelines and automation.

Once you’ve made your decision, the conceptual work for your pipelines can begin. There are basically no limits to their construction and use. There are two basic kinds of pipelines: CI and CI/CD. CI (Continuous Integration) tries to merge new code changes back into the mainline as often as possible. With a focus on test automation, it attempts to uncover new errors and other problems as early as the integration of the new code. This kind of pipeline can be executed automatically after a pull request is merged, for example. But it’s also possible to check open PRs in advance to see whether the submitted code meets our quality requirements. CI/CD (CD stands for Continuous Delivery or Continuous Deployment) goes one step further. With Continuous Delivery, changes are directly available for live deployment, provided they pass the necessary tests and the pipeline’s analyses. With Continuous Deployment, changes are even loaded onto the production system fully automatically after successful tests. The basis for both deployment methods is a comprehensive and strict Continuous Integration pipeline that should try to uncover as many errors in advance as possible. To apply all of this, we need some processes and tools that must first be integrated into our software.
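As a sketch of what such a CI pipeline can look like, here is a minimal, hypothetical GitHub Actions workflow that runs on every pull request. The make targets are assumptions that stand in for your project’s own commands:

```yaml
# .github/workflows/ci.yml (hypothetical example)
name: CI
on: [pull_request]

jobs:
  quality:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: shivammathur/setup-php@v2
        with:
          php-version: '8.2'
      - run: make dev      # install DEV dependencies
      - run: make phpstan  # static analysis
      - run: make tests    # unit tests
```

If any step fails, the whole workflow, and with it the pull request check, turns red.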


In the beginning, there was software

Unit tests, static code analysis, and other operations need to be prepared and embedded in our application in the best way possible. They can be used in pipelines, but they have no direct connection to them; they simply provide ways of measuring the quality of our application. It’s always important to have the best possible coverage through different kinds of tests. Our error concerning the NULL checks should already be discovered here. The number of tools utilized grows quickly. So that developers don’t lose sight of the forest for the trees, attention should always be paid to the tools’ user-friendliness. Developers can be touchy when it comes to testing and quality control; only a few really enjoy putting their own code to the test. So it’s crucial that finding and fixing errors in a pipeline is easy and straightforward for developers. Let’s say we create a pull request that starts an extensive pipeline. After a long wait, we hit the first error and the pipeline breaks. We fix the problem. Then we wait, hit another error, and the pipeline breaks again. These waiting times can frustrate and exhaust developers. This is often the reason why, with open source software, I end up not creating a pull request at all.

For this problem, a makefile with a few simple commands can act as a solution. Each tool should be callable through it with the configuration intended for the software. So there could be a make tests for unit tests, or a make phpstan for running PHPStan, and so on. Since these commands bring the correct configuration with them, they can be reused immediately in pipelines. This creates transparency and provides a perfect abstraction layer for sharing tools and configurations centrally. If you want to make it even more palatable for developers, you can offer commands like make pr or make review that prepare code changes for a pull request (PHP-CS-Fixer in fix mode, etc.), or even execute the actual pipeline on the developer’s machine. Now that all our preparations are in place, it’s time to integrate our tools and build our pipeline piece by piece.

Abstraction layer with makefiles

Handling projects today isn’t a simple matter anymore. What do I need to install? How do I start the tests? This is not only annoying, it also tempts you to do some things less often. One way of counteracting this problem is to introduce an abstraction layer with makefiles. This makes creating and executing commands at the command-line level relatively easy. It also has the advantage that technologies and configurations can be exchanged centrally without having to touch the pipelines. For instance, the following options are available for the developer, but they can also be used in a CI/CD pipeline:

  • make install (installs PROD dependencies)
  • make dev (installs DEV dependencies)
  • make build (starts SASS compiler, builds artifacts…)
  • make tests (starts PHPUnit + Jest…)
  • make phpstan (runs PHPStan)
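A matching makefile could look like the following sketch; it assumes Composer and the tools discussed in this article are already set up in the project:

```makefile
# Hypothetical makefile: each target wraps a tool with its
# project-specific configuration, so developers and pipelines
# run exactly the same commands.
# (Recipe lines must be indented with tabs.)
.PHONY: install dev tests phpstan

install:
	composer install --no-dev

dev:
	composer install

tests:
	php vendor/bin/phpunit --configuration=phpunit.xml

phpstan:
	php vendor/bin/phpstan analyse -c phpstan.neon
```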

PHP syntax checks

We’ll start building our pipeline with a few simple syntax checks using the PHP binary directly. This lets us check for missing semicolons and other syntax errors without an additional framework. It’s a perfect, easy introduction before we continue with unit tests and other verifications. Because the PHP linter php -l can only interpret and check one file at a time, we use the find command to search for all *.php files and pipe the results to our linter command. With -n 1, we tell xargs to execute one command process per result. To make sure that we only check our own code and leave out large directories like the well-known node_modules folder, we can ignore certain paths when searching for files; whether this is really necessary or desirable depends on the project. Last but not least, we can use -P to start several parallel processes and squeeze out some performance. The following example searches all PHP files in the current directory (and its subdirectories), except for node_modules and vendor, and runs up to four PHP linter processes in parallel.

find . -name '*.php' -not -path "./node_modules/*" -not -path "./vendor/*" | xargs -n 1 -P 4 php -l

We could now store this, for instance, as make phpcheck in the makefile, making it easily usable for developers.
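To see the path filtering in action without a full project, the snippet below builds a throwaway directory tree and runs the same find expression; php -l is left out so the sketch works even on a machine without a PHP binary:

```shell
# Build a throwaway tree with PHP files inside and outside ignored paths
tmp=$(mktemp -d)
mkdir -p "$tmp/src" "$tmp/node_modules/pkg" "$tmp/vendor/lib"
touch "$tmp/src/App.php" "$tmp/node_modules/pkg/index.php" "$tmp/vendor/lib/Lib.php"

# Same filter as above: only our own code survives
found=$(cd "$tmp" && find . -name '*.php' -not -path "./node_modules/*" -not -path "./vendor/*")
echo "$found"

rm -rf "$tmp"
```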

PHP Minimum Compatibility

As a manufacturer of frameworks or plug-ins, it’s extremely important to pay attention to the PHP versions your software supports. But even with a simple web application rollout, it can quickly happen that you roll out code with a feature that isn’t available on the server yet due to an older PHP version. To prevent this problem in the automation chain, you can use the framework phpcompatibility/php-compatibility. Built as an external standard for PHP_CodeSniffer (squizlabs/php_codesniffer), it lets you check compatibility with a minimum PHP version from the command line. Using the Composer-installed PHP_CodeSniffer with this standard is very easy, but both packages need to be installed and correctly configured. Once this is done, the line below can be used to check whether our software still supports PHP 5.6 as a minimum requirement. Additionally, folders can be excluded from this check, which is perfect for legacy projects that keep their unit tests on a more recent PHP version while the production code remains compatible down to PHP 5.6.

php vendor/bin/phpcs -p --ignore=*/Tests*,*/OtherFolder/* --standard=PHPCompatibility --extensions=php --runtime-set testVersion 5.6

In order to use PHP_CodeSniffer and PHPCompatibility, both packages need to be installed first.

composer require --dev squizlabs/php_codesniffer
composer require --dev phpcompatibility/php-compatibility

In order for PHPCompatibility to be recognized by PHP_CodeSniffer and usable as the standard for our checks, we must register its path accordingly. This can be done either manually or automatically with scripts in the composer.json. After every installation or update, the code in Listing 1 checks whether the phpcs binary exists (it’s only installed as a dev dependency) and automatically registers the path of PHPCompatibility. It requires no interaction from the developer (perfect for pipelines), and nothing happens when installing only the production dependencies!

"scripts": {
  "post-install-cmd": [
    "[ ! -f vendor/bin/phpcs ] || vendor/bin/phpcs --config-set installed_paths vendor/phpcompatibility/php-compatibility"
  ],
  "post-update-cmd": [
    "[ ! -f vendor/bin/phpcs ] || vendor/bin/phpcs --config-set installed_paths vendor/phpcompatibility/php-compatibility"
  ]
}

Unit Tests

Now that our code’s validity has been verified based on syntax and PHP version, we can turn our attention to our application’s functionality. Unit tests play an important role here. Experience with various teams and developers shows how essential it is that this step is accepted within the company. All beginnings are difficult, especially when you’re forced to write tests. But experience also shows that there’s greater acceptance of creating new tests if preparations have already been made. In any case, a few tests must already exist, and starting the test suite needs to be easy and fast. Another motivational factor is the automatic execution of these tests in pipelines. If this has already been prepared, the test suite just needs to be extended, which is a much lower conceptual hurdle for developers. The following lines install PHPUnit as a dev dependency and start it with a phpunit.xml configuration file that still needs to be created.


composer require --dev phpunit/phpunit
php vendor/bin/phpunit --configuration=phpunit.xml

The configuration file can be created either manually or with PHPUnit commands. The simple example in Listing 2 shows an execution definition that loads all tests from the ./Tests/PHPUnit folder. Composer’s default autoload.php in the vendor directory is used as the bootstrap file for the autoloader, so all classes in your project should be found in the unit tests.

<phpunit ...
         bootstrap="vendor/autoload.php"
         ...>
 
  <testsuites>
    <testsuite name="My Test Suite">
      <directory>./Tests/PHPUnit</directory>
    </testsuite>
  </testsuites>
 
</phpunit>
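A first test in ./Tests/PHPUnit could then look like the following sketch. The class, method, and VAT scenario are invented for illustration, and PHPUnit is assumed to be installed via Composer as shown above:

```php
<?php
// Tests/PHPUnit/VatCalculatorTest.php (hypothetical example)
use PHPUnit\Framework\TestCase;

final class VatCalculatorTest extends TestCase
{
    public function testGrossIsNetPlusVat(): void
    {
        // Invented subject under test: 19% VAT on a net price in cents,
        // calculated with integers to avoid float rounding surprises
        $netCents = 10000;
        $grossCents = intdiv($netCents * 119, 100);

        $this->assertSame(11900, $grossCents);
    }
}
```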

With a simple make tests, every developer can start the unit tests without much effort. Pipelines aren’t just successive commands; they can also tell a story. In our process, if all checks up to this point have been successful, our software has correct, executable syntax and delivers the expected results in terms of logic and functionality at the lowest level. Once this is guaranteed, we should check other things, like coding styles, or perform static analysis. Depending on your taste, this can be done in different orders. Parallelizing these steps also has its appeal.

PHPStan

PHPStan focuses on finding errors without actually having to execute the code. All files and classes that need to be checked are analyzed for different error sources. Here, PHPStan differentiates between different levels, with level 8 being the highest—and sometimes most annoying—but safest level. In addition to level configurations, you can also set individual settings for checks and rules. You can even implement your own rules, such as pointing out an incorrect copyright annotation (for open source software). Installation is done with Composer.

composer require --dev phpstan/phpstan

Once installed, it’s time to configure PHPStan. This is done with a phpstan.neon file. The configuration includes the directories to analyze, ignored directories, a bootstrap file for the autoloader, additional settings, rules, and more. Listing 3 shows a simple example of a level 8 check for the current directory. Files in the Resources and vendor folders are ignored.

parameters:
 
  level: 8
  paths:
    - .
  excludes_analyse:
    - Resources/*
    - vendor/*

PHPStan can then be easily executed with the analyse command and the configuration file. The results are clearly visible in the terminal output.

php vendor/bin/phpstan analyse -c phpstan.neon
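To make this concrete: the sketch below, with invented classes, contains exactly the NULL mistake from the introduction. A level 8 run reports the commented-out line as a possible method call on NULL, while the guarded version passes the analysis:

```php
<?php
// Invented classes illustrating a typical level 8 finding
class Customer
{
    public function getName(): string
    {
        return 'Mark';
    }
}

// The return type says: you may get a Customer, or you may get NULL
function findCustomer(int $id): ?Customer
{
    return $id === 1 ? new Customer() : null;
}

// PHPStan (level 8) flags this line, because the value may be NULL:
// echo findCustomer(2)->getName();

// The guarded version is accepted:
$customer = findCustomer(2);
echo $customer !== null ? $customer->getName() : 'unknown';
```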

PHP-CS-Fixer

The PHP Coding Standards Fixer verifies your code against a selected coding standard. You can choose between PSR-1, PSR-2, and many more; it even supports community styles like the Symfony standard. As the name suggests, the fixer can also automatically modify, optimize, and fix your source code. The tool is perfect if you work in a team and want to enforce a uniform standard, but it’s also highly recommended for lone warriors. Installation with Composer is very simple:

composer require --dev friendsofphp/php-cs-fixer

A .php_cs.php file is used to configure CS-Fixer. In it, you define all settings regarding rules, caching, file finding, and more. The example in Listing 4 shows a relatively simple, executable variant. The focus is mainly on specifying the folders to ignore, such as vendor.

<?php
return PhpCsFixer\Config::create()
  ->setUsingCache(false)
  ->setRules([
    'array_syntax' => ['syntax' => 'short'],
    'ordered_imports' => true,
  ])
  ->setFinder(
    PhpCsFixer\Finder::create()
      ->exclude(['.git', '.github', 'vendor'])
      ->in(__DIR__)
  );

Execution takes place either as a dry run, for purely analytical purposes, or directly with the integrated automatic fixer. In the pipeline, the clear recommendation is the dry run, since the focus there is on analysis. But a make pr command could automatically optimize the files with the fixer.

# Analysis with --dry-run
php vendor/bin/php-cs-fixer fix --config=./.php_cs.php --dry-run
# Fixing problems automatically
php vendor/bin/php-cs-fixer fix --config=./.php_cs.php

Integration Tests/E2E

After the previous steps have all been completed successfully, we’ll move on to the highest tier of testing. While this isn’t directly PHP-specific, it’s an essential means of testing a PHP (or any other) application in operation. Automated front-end testing requires a unique level of skill in both approach and stabilization. You also need a stable running environment, preferably one that’s booted automatically in the cloud system where the pipelines run. Depending on the project, this can be easier or harder to implement, but it always pays off. Plug-in and framework developers might have it easier here. For instance, if you’re developing a plug-in for a platform like Shopware, you can access ready-made Docker images from dockware.io. It’s enough to start the container and install the plug-in, and you have a simple localhost environment with demo data running that you can test against, even in parallel for different Shopware versions.

 

For your own projects, you’ll need to boot up an environment and import databases, images, and other data. Of course, this should comply with the GDPR, for instance by working with anonymized data. Often, the simpler variant is to test against a separately installed test server on which the new software version has been deployed. Which framework you use to run the tests is up to you. Whether it’s Cypress, Codeception, Ranorex, or something else, the important thing is that you keep your tests under control in the long run. It’s recommended that you offer the lowest possible threshold for developers. For instance, with Cypress and makefiles, you can create a way to quickly launch the Cypress graphical interface in the correct configuration with make open-ui, or run tests directly at the command line level using make run.
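The two commands mentioned could be wired up like this; a sketch assuming Cypress is installed via npm and configured in the project root:

```makefile
# Hypothetical makefile targets for the E2E suite
# (recipe lines must be indented with tabs)
.PHONY: open-ui run

open-ui:
	npx cypress open

run:
	npx cypress run
```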

Next steps

There are still plenty of additional tasks that can be done in pipelines. This includes creating .env files with sensitive data, performing database migrations with Doctrine Migrations, or additional tasks like configuring plug-ins, adjusting settings, clearing caches, and more. While the PHP application’s quality is now assured, additional project-dependent steps may be needed in the area of pipeline optimization. One thing is, and remains, particularly important: after your pipeline has run through and your software is released, everything should be ready to go. Treat pipelines as first-class citizens, subject to constant adjustment and optimization, in order to offer secure deployments in the future.

Conclusion

Increasing long-term quality in a PHP project isn’t done with just a few clicks. Preventing the variety of issues that PHP can cause requires a well-coordinated combination of tools, pipelines, approaches, and developer awareness. But once you’ve configured all of this and made it available to your teams with simple tools (makefiles, pipelines, etc.), nothing can stand in the way of successive optimization and fewer deployment problems. Once the strict pipeline has grown and test coverage is sufficient, even concepts like Continuous Deployment can be meaningfully applied in the project.

Links & Literature

[1] https://www.phpunit.de

[2] https://github.com/phpstan/phpstan

[3] https://github.com/FriendsOfPHP/PHP-CS-Fixer

[4] https://github.com/PHPCompatibility/PHPCompatibility

[5] https://www.dockware.io

[6] https://www.cypress.io
